1 Introduction

This notebook contains an analysis and forecasting exercise for the daily electricity load in the GCROC, covering exploratory data analysis, data transformations, and a range of forecasting models implemented with the fable and modeltime packages.

1.1 Packages

The packages necessary to run this notebook are the following.

library(tidyverse) # data cleaning, wrangling, plotting
library(lubridate) # dates handling
library(patchwork) # plot setup
library(plotly)    # interactive plots

# the tidyverts
library(tsibble)
library(feasts)
library(fable)
library(fable.prophet)
library(fasster)

# modeltime
library(tidymodels)
library(modeltime)

1.2 The data

The main data to be used is the daily load from the Western Regional Control Management (GCROC) of CENACE. We also use holiday data to gain insight into the variation.

datos <- read_csv("gcroc.csv")
festivos <- read_csv("festivos_utf8.csv")

# add the holiday information to the load data
datos <- datos %>% 
  left_join(festivos, by = "fecha")

datos <- datos %>% 
  mutate(fecha = dmy(fecha),
         dia_semana = factor(dia_semana, levels = 1:7),
         # classify each date as Monday, weekday, weekend or holiday
         Day_type = case_when(
           !is.na(festivo) ~ "Holiday",
           wday(fecha, week_start = 1) == 1 ~ "Monday",
           wday(fecha, week_start = 1) %in% 2:5 ~ "Weekday",
           TRUE ~ "Weekend"
         ) %>% factor(levels = c("Monday", "Weekday",
                                 "Weekend", "Holiday")),
         Month = month(fecha, label = TRUE) %>% factor(ordered = FALSE)
         ) %>% 
  as_tsibble(index = fecha) %>% 
  rename(`temp min` = temp_min, 
         `temp max` = temp_max,
         `temp prom` = temp_prom)
datos

2 Exploratory Data Analysis (EDA)

2.1 Visual inspection

We begin by plotting the data in a time plot and in boxplots, first by day type and then by month, to check for differences across time.

g1 <- ggplot(datos,aes(x = fecha, y = MWh)) + xlab("")
# Time plot
p1 <- g1 + geom_line(color = "blue") + 
  ggtitle("Historic Load") 
# Day type boxplot
p2 <- g1 + geom_boxplot(aes(x = Day_type, fill = Day_type)) + 
  theme(legend.position = "none",
        axis.text.x =  element_text(angle = 45,
                                      hjust = 1)) + 
  ggtitle("Load Across Day Types")
# Month boxplot
p3 <- g1 + geom_boxplot(aes(x = Month, 
  fill = Month)
  ) + 
  theme(legend.position = "none",
        axis.text.x =  element_text(angle = 45,
                                      hjust = 1)) + 
  ggtitle("Load Across Months")
# plot group 1
pg1 <- p1 / (p2 | p3)

pg1 + plot_annotation(
  title = "Daily Electricity Load in the GCROC"
)

As can be seen from the time plot, the data exhibits a long-term upward trend, as well as indications of seasonality.

By inspecting both boxplots, we see that differences arise both across day types (whether a day is a Monday, another weekday, a weekend day, or a holiday) and across months, further suggesting two types of seasonality: weekly and yearly.
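
These two suspected seasonal periods can also be inspected directly with the feasts helpers loaded above; a sketch, assuming `datos` is the tsibble built in section 1.2:

```r
# Seasonal plots overlay each period on top of the others;
# a stable shape repeating across periods indicates seasonality.
datos %>% gg_season(MWh, period = "week")
datos %>% gg_season(MWh, period = "year")

# The ACF should show spikes at multiples of 7 if the
# weekly pattern is present.
datos %>% ACF(MWh, lag_max = 28) %>% autoplot()
```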

The summary statistics of the numeric variables are computed below.

datos %>%  
  features_if(is_double, features =
                         list(
       Mean = (~ mean(., na.rm = TRUE)),
       `Std. Dev.` = (~ sd(., na.rm = TRUE)),
       Min = (~ min(., na.rm = TRUE)),
       Median = (~ median(., na.rm = TRUE)),
       Max = (~ max(., na.rm = TRUE)),
       N = length)
       ) %>% 
  pivot_longer(everything()) %>% 
  separate(name, into = c("var","stat"), sep = "_") %>% 
  pivot_wider(names_from = "var",values_from = "value")

We now examine the relationship between electricity load and maximum temperature across day types.

p4 <- datos %>% 
  filter(year(fecha)>2001) %>% 
  ggplot(aes(x = `temp max`, y = MWh)) + 
  geom_point(aes(color = Day_type)) + 
  ggtitle("Daily electricity load as a function of maximum temperature") + 
  xlab("(C°)") + 
  geom_smooth(method = "lm", formula = y ~ x + I(x^2)) +
  labs(color = "") +
  theme(legend.position = "top")
  

p4

The data show a U-shaped relationship. When the maximum temperature is low, energy demand tends to be higher; the same happens, to an even greater degree, when the maximum temperature is high.

From the plot above, holidays and weekends tend to sit in the lower part, while Mondays and weekdays cluster higher. The separation between them is not entirely clear, possibly because of the upward trend over the years.

We could plot each year separately to get a better look, or remove the trend from the series.

Plotting each year separately yields the plot below.

p4 + facet_wrap(~ factor(year(fecha))) 

Removing the trend using an STL decomposition:

p5 <- datos %>%
  mutate(detrend = MWh - datos %>% 
           model(STL(MWh)) %>% 
           components() %>% 
           pull(trend)
           ) %>% 
  ggplot(aes(x = `temp max`, y = detrend)) + 
  geom_point(aes(color = Day_type)) + 
  ggtitle("Detrended daily electricity load as a function of maximum temperature") + 
  xlab("(C°)") +
  labs(color = "") +
  theme(legend.position = "top")

p5 + 
  geom_smooth(method = "lm", formula = y ~ x + I(x^2))

p5 + 
  geom_smooth(method = "lm", formula = y ~ x + I(x^2),
              aes(color = Day_type)) 

In both cases, clearer clusters now form, supporting what we saw in the boxplots. Adding a global trend line or a separate one for each day type reveals a similar pattern.

2.2 Data transformations

We check whether applying a transformation to the data could help us reduce irrelevant variation.

First, we take the log values.

p1_log <- g1 + geom_line(aes(y = log(MWh)), color = "orchid2") + 
  ggtitle("Log of historic Load") + ylab("log(MWh)")

p1 / p1_log

Now, we try using a Box-Cox transformation, choosing lambda with the Guerrero feature.

lambda <- datos %>% 
  slice_head(n = nrow(datos) - 30) %>% 
  features(MWh, features = guerrero) %>%
  pull(lambda_guerrero)

lambda
[1] 0.1032301
p1_boxcox <- g1 + geom_line(aes(y = box_cox(MWh, lambda = lambda)), color = "turquoise2") + 
  ggtitle("Historic Load with a Box-Cox transformation") + ylab(paste0("Box-Cox using lambda = ",round(lambda,digits = 4)))

p1 / p1_boxcox
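
When a model is later fitted to the transformed series, fable back-transforms the forecasts automatically as long as the transformation appears in the model formula. The inverse transformation is also available directly through `inv_box_cox()`; a quick sanity check, assuming `lambda` was computed above:

```r
# Round-trip through the Box-Cox transformation and its inverse;
# the result should match the original series up to floating-point error.
transformed <- box_cox(datos$MWh, lambda = lambda)
recovered   <- inv_box_cox(transformed, lambda = lambda)
all.equal(recovered, datos$MWh)
```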

3 R Packages for modeling and forecasting

We will produce forecasts using a wide variety of models, with two different R packages: fable and modeltime.

3.1 fable and the tidyverts ecosystem

A generalized function to obtain forecasts via fable is shown below. One must specify the model with as much customization as needed.

fable_workflow <- function(data = datos, model, lag = 10, dof = 0, horizon = 30, year = 2018){
  # Splitting data into train and test sets
  train <- data %>% 
    slice_head(n = (nrow(data) - horizon))

  test <- data %>% 
    slice_tail(n = horizon)

  # fitting the model
  fable <- train %>%
    model(model)
  
  print("The fitted model:")
  fable %>% 
    report()
  
  print("Training accuracy:")
  fable %>% 
    fabletools::accuracy() %>% 
    print()
  
  # Residual diagnostics:
  resids <- fable %>% 
    gg_tsresiduals() +
    ggtitle("Residual diagnostics")
  print(resids)
  
  print("Portmanteau tests for autocorrelation")
  fable %>% 
    augment() %>% 
    features(.resid, ljung_box, lag = lag, dof = dof) %>% 
    print()
  
  
  print("The forecast:")
  fable_fcst <- fable %>% 
    forecast(h = horizon)
  print(fable_fcst)
  
  # Plotting the forecast:
  fcst <- 
    # Historic data vs. forecast
    fable_fcst %>% 
    autoplot(data) / 
    # recent history vs. forecast
    fable_fcst %>% 
    autoplot(data %>% filter(year(fecha)>year)) +
    plot_annotation(title = "The Forecast")
  print(fcst)
  
  print("Forecast accuracy:")
  fable_fcst %>% 
    fabletools::accuracy(test) %>% 
    print()
  
}
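
As a hypothetical first call, a seasonal naive benchmark run through this workflow would look like the following; SNAIVE is a sensible baseline to beat before fitting the richer models of section 4:

```r
# Benchmark: repeat the value observed one week earlier.
fable_workflow(model = SNAIVE(MWh ~ lag("week")),
               horizon = 30,
               year = 2018)
```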

3.2 modeltime: an extension to tidymodels
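
modeltime expresses time series models through the parsnip interface: a model specification is fitted with a formula, collected into a modeltime table, calibrated on a holdout set, and then evaluated or forecast. A minimal sketch of that pipeline (the 30-day holdout and the auto_arima engine are illustrative choices, and timetk is loaded here for the splitting helper):

```r
library(tidymodels)
library(modeltime)
library(timetk)

datos_tbl <- as_tibble(datos)   # modeltime works on plain tibbles

# Keep the last 30 days as a test set
splits <- time_series_split(datos_tbl, date_var = fecha,
                            assess = 30, cumulative = TRUE)

# Specify and fit a model through the parsnip interface
fit_arima <- arima_reg() %>%
  set_engine("auto_arima") %>%
  fit(MWh ~ fecha, data = training(splits))

# Collect, calibrate on the test set, and compute accuracy
modeltime_table(fit_arima) %>%
  modeltime_calibrate(new_data = testing(splits)) %>%
  modeltime_accuracy()
```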

4 The models

Below we implement many of the models available in both packages.

4.1 Time series decomposition

4.1.1 .

There are many decomposition methods.

4.1.2 fable

4.1.2.1 Classical decomposition

datos %>% 
  model(classical_decomposition(MWh, type = "multiplicative")) %>% 
  components() %>% 
  autoplot() + xlab("year") +
  ggtitle("Classical multiplicative decomposition")

4.1.2.2 STL decomposition

An STL decomposition is obtained with the following code.

datos %>% 
  model(STL(MWh)) %>% 
  components() %>% 
  autoplot() + xlab("year") +
  ggtitle("STL decomposition")

The STL decomposition can be customized. In this case, we make the trend smoother by widening its window.

datos %>% 
  model(STL(MWh ~ trend(window = 183))) %>% 
  components() %>% 
  autoplot() + xlab("year") +
  ggtitle("STL decomposition")

We could fix the seasonal pattern by specifying season(window = 'periodic').

datos %>% 
  model(STL(MWh ~ trend(window = 183) + 
              season(window = 'periodic'))) %>% 
  components() %>% 
  autoplot() + xlab("year") +
  ggtitle("STL decomposition")

  • Modeling and forecasting using STL and ARIMA:
fable_workflow(data = datos, 
               model = decomposition_model(
                 STL(box_cox(MWh, lambda = lambda) ~ trend(window = 183)),
                 ARIMA(season_adjust ~ PDQ(0,0,0))
               ))
Non-integer lag orders for random walk models are not supported. Rounding to the nearest integer.
[1] "The fitted model:"
Series: MWh 
Model: STL decomposition model 
Transformation: box_cox(.x, lambda = lambda) 
Combination: season_adjust + season_year + season_week

======================================================

Series: season_adjust + season_year 
Model: COMBINATION 
Combination: season_adjust + season_year

========================================

Series: season_adjust 
Model: ARIMA(1,1,1) w/ drift 

Coefficients:
         ar1      ma1  constant
      0.6907  -0.9618     1e-04
s.e.  0.0126   0.0057     0e+00

sigma^2 estimated as 0.006832:  log likelihood=6924.97
AIC=-13841.93   AICc=-13841.93   BIC=-13814.85

Series: season_year 
Model: SNAIVE 

sigma^2: 1e-04 


Series: season_week 
Model: SNAIVE 

sigma^2: 1e-04 

[1] "Training accuracy:"
[1] "Portmanteau tests for autocorrelation"
[1] "The forecast:"
[1] "Forecast accuracy:"

  • Using STL and ETS:
fable_workflow(data = datos, 
               model = decomposition_model(
                 STL(box_cox(MWh, lambda = lambda) ~ trend(window = 183)),
                 ETS(season_adjust ~ season("N"))
               ))
Non-integer lag orders for random walk models are not supported. Rounding to the nearest integer.
[1] "The fitted model:"
Series: MWh 
Model: STL decomposition model 
Transformation: box_cox(.x, lambda = lambda) 
Combination: season_adjust + season_year + season_week

======================================================

Series: season_adjust + season_year 
Model: COMBINATION 
Combination: season_adjust + season_year

========================================

Series: season_adjust 
Model: ETS(A,N,N) 
  Smoothing parameters:
    alpha = 0.7406011 

  Initial states:
        l
 22.38522

  sigma^2:  0.0075

     AIC     AICc      BIC 
25000.30 25000.31 25020.62 

Series: season_year 
Model: SNAIVE 

sigma^2: 1e-04 


Series: season_week 
Model: SNAIVE 

sigma^2: 1e-04 

[1] "Training accuracy:"
[1] "Portmanteau tests for autocorrelation"
[1] "The forecast:"
[1] "Forecast accuracy:"

  • Using STL with ARIMA and Fourier terms:
fable_workflow(data = datos, 
               model = decomposition_model(
                 STL(box_cox(MWh, lambda = lambda) ~ trend(window = 183)),
                 ARIMA(season_adjust ~ PDQ(0,0,0)),
                 TSLM(season_year ~ fourier(K = 10, period = "1 year")),
                 TSLM(season_week ~ fourier(K = 2, period = "1 week"))
               ))
[1] "The fitted model:"
Series: MWh 
Model: STL decomposition model 
Transformation: box_cox(.x, lambda = lambda) 
Combination: season_adjust + season_year + season_week

======================================================

Series: season_adjust + season_year 
Model: COMBINATION 
Combination: season_adjust + season_year

========================================

Series: season_adjust 
Model: ARIMA(1,1,1) w/ drift 

Coefficients:
         ar1      ma1  constant
      0.6907  -0.9618     1e-04
s.e.  0.0126   0.0057     0e+00

sigma^2 estimated as 0.006832:  log likelihood=6924.97
AIC=-13841.93   AICc=-13841.93   BIC=-13814.85

Series: season_year 
Model: TSLM 

Residuals:
      Min        1Q    Median        3Q       Max 
-0.634312 -0.032274  0.002703  0.038800  0.251003 

Coefficients:
                                            Estimate Std. Error
(Intercept)                                0.0001739  0.0009968
fourier(K = 10, period = "1 year")C1_365  -0.1204390  0.0014061
fourier(K = 10, period = "1 year")S1_365   0.1026533  0.0014132
fourier(K = 10, period = "1 year")C2_365  -0.0474449  0.0014099
fourier(K = 10, period = "1 year")S2_365  -0.0613543  0.0014094
fourier(K = 10, period = "1 year")C3_365  -0.0007444  0.0014101
fourier(K = 10, period = "1 year")S3_365   0.0301872  0.0014092
fourier(K = 10, period = "1 year")C4_365  -0.0629057  0.0014087
fourier(K = 10, period = "1 year")S4_365  -0.0167954  0.0014106
fourier(K = 10, period = "1 year")C5_365   0.0232605  0.0014090
fourier(K = 10, period = "1 year")S5_365   0.0026676  0.0014104
fourier(K = 10, period = "1 year")C6_365  -0.0301078  0.0014098
fourier(K = 10, period = "1 year")S6_365  -0.0103454  0.0014095
fourier(K = 10, period = "1 year")C7_365  -0.0225418  0.0014093
fourier(K = 10, period = "1 year")S7_365   0.0274436  0.0014100
fourier(K = 10, period = "1 year")C8_365  -0.0282852  0.0014089
fourier(K = 10, period = "1 year")S8_365   0.0234484  0.0014104
fourier(K = 10, period = "1 year")C9_365  -0.0175749  0.0014094
fourier(K = 10, period = "1 year")S9_365   0.0099437  0.0014098
fourier(K = 10, period = "1 year")C10_365 -0.0313510  0.0014094
fourier(K = 10, period = "1 year")S10_365  0.0096697  0.0014095
                                          t value Pr(>|t|)    
(Intercept)                                 0.174   0.8615    
fourier(K = 10, period = "1 year")C1_365  -85.655  < 2e-16 ***
fourier(K = 10, period = "1 year")S1_365   72.638  < 2e-16 ***
fourier(K = 10, period = "1 year")C2_365  -33.651  < 2e-16 ***
fourier(K = 10, period = "1 year")S2_365  -43.532  < 2e-16 ***
fourier(K = 10, period = "1 year")C3_365   -0.528   0.5976    
fourier(K = 10, period = "1 year")S3_365   21.422  < 2e-16 ***
fourier(K = 10, period = "1 year")C4_365  -44.656  < 2e-16 ***
fourier(K = 10, period = "1 year")S4_365  -11.906  < 2e-16 ***
fourier(K = 10, period = "1 year")C5_365   16.509  < 2e-16 ***
fourier(K = 10, period = "1 year")S5_365    1.891   0.0586 .  
fourier(K = 10, period = "1 year")C6_365  -21.356  < 2e-16 ***
fourier(K = 10, period = "1 year")S6_365   -7.340 2.40e-13 ***
fourier(K = 10, period = "1 year")C7_365  -15.995  < 2e-16 ***
fourier(K = 10, period = "1 year")S7_365   19.464  < 2e-16 ***
fourier(K = 10, period = "1 year")C8_365  -20.076  < 2e-16 ***
fourier(K = 10, period = "1 year")S8_365   16.626  < 2e-16 ***
fourier(K = 10, period = "1 year")C9_365  -12.470  < 2e-16 ***
fourier(K = 10, period = "1 year")S9_365    7.053 1.93e-12 ***
fourier(K = 10, period = "1 year")C10_365 -22.244  < 2e-16 ***
fourier(K = 10, period = "1 year")S10_365   6.861 7.50e-12 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.08001 on 6426 degrees of freedom
Multiple R-squared: 0.7676, Adjusted R-squared: 0.7669
F-statistic:  1061 on 20 and 6426 DF, p-value: < 2.22e-16


Series: season_week 
Model: TSLM 

Residuals:
      Min        1Q    Median        3Q       Max 
-0.209075 -0.051443  0.004141  0.053203  0.174777 

Coefficients:
                                        Estimate Std. Error t value
(Intercept)                            1.672e-06  8.368e-04   0.002
fourier(K = 2, period = "1 week")C1_7  1.834e-01  1.183e-03 154.979
fourier(K = 2, period = "1 week")S1_7 -9.024e-02  1.183e-03 -76.253
fourier(K = 2, period = "1 week")C2_7 -7.645e-02  1.183e-03 -64.598
fourier(K = 2, period = "1 week")S2_7  9.708e-02  1.183e-03  82.034
                                      Pr(>|t|)    
(Intercept)                              0.998    
fourier(K = 2, period = "1 week")C1_7   <2e-16 ***
fourier(K = 2, period = "1 week")S1_7   <2e-16 ***
fourier(K = 2, period = "1 week")C2_7   <2e-16 ***
fourier(K = 2, period = "1 week")S2_7   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.06719 on 6442 degrees of freedom
Multiple R-squared: 0.8635, Adjusted R-squared: 0.8634
F-statistic: 1.018e+04 on 4 and 6442 DF, p-value: < 2.22e-16

[1] "Training accuracy:"
[1] "Portmanteau tests for autocorrelation"
[1] "The forecast:"
[1] "Forecast accuracy:"

4.1.3 modeltime
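
A rough modeltime counterpart of the STL-based models above is `seasonal_reg()` with an STL + ARIMA engine. A sketch, where the weekly and yearly periods mirror the fable specification and the 30-day split matches the workflow's horizon:

```r
library(tidymodels)
library(modeltime)
library(timetk)

datos_tbl <- as_tibble(datos)
splits <- time_series_split(datos_tbl, date_var = fecha,
                            assess = 30, cumulative = TRUE)

# STL decomposition with an ARIMA model on the seasonally adjusted series
fit_stlm <- seasonal_reg(seasonal_period_1 = 7,
                         seasonal_period_2 = 365) %>%
  set_engine("stlm_arima") %>%
  fit(MWh ~ fecha, data = training(splits))

modeltime_table(fit_stlm) %>%
  modeltime_calibrate(new_data = testing(splits)) %>%
  modeltime_accuracy()
```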

4.2 Exponential smoothing

4.2.1 .

4.2.2 fable

fable_workflow(model = ETS(MWh, opt_crit = "mse"),
               horizon = 30,
               year = 2018)
[1] "The fitted model:"
Series: MWh 
Model: ETS(A,N,A) 
  Smoothing parameters:
    alpha = 0.9533124 
    gamma = 0.04265283 

  Initial states:
      l       s1       s2       s3       s4        s5        s6        s7
 106379 4727.049 5687.695 5698.116 4774.703 -1353.729 -16734.81 -2799.022

  sigma^2:  24853134

     AIC     AICc      BIC 
166342.7 166342.8 166410.4 
[1] "Training accuracy:"
[1] "Portmanteau tests for autocorrelation"
[1] "The forecast:"
[1] "Forecast accuracy:"

4.2.3 modeltime
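
The modeltime analogue of the ETS model above uses `exp_smoothing()`; a sketch under the same illustrative 30-day holdout as before:

```r
library(tidymodels)
library(modeltime)
library(timetk)

datos_tbl <- as_tibble(datos)
splits <- time_series_split(datos_tbl, date_var = fecha,
                            assess = 30, cumulative = TRUE)

fit_ets <- exp_smoothing() %>%
  set_engine("ets") %>%          # automatic ETS model selection
  fit(MWh ~ fecha, data = training(splits))

modeltime_table(fit_ets) %>%
  modeltime_calibrate(new_data = testing(splits)) %>%
  modeltime_accuracy()
```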

4.3 ARIMA

4.3.1 .

4.3.2 fable

fable_workflow(model = ARIMA(MWh),
               horizon = 30,
               year = 2018)
[1] "The fitted model:"
Series: MWh 
Model: ARIMA(1,0,2)(2,1,0)[7] 

Coefficients:
         ar1     ma1      ma2     sar1     sar2
      0.7351  0.1529  -0.0757  -0.4803  -0.2884
s.e.  0.0158  0.0208   0.0177   0.0122   0.0120

sigma^2 estimated as 26308527:  log likelihood=-64152.01
AIC=128316   AICc=128316   BIC=128356.6
[1] "Training accuracy:"
[1] "Portmanteau tests for autocorrelation"
[1] "The forecast:"
[1] "Forecast accuracy:"

4.3.3 modeltime
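
In modeltime, the automatic ARIMA above corresponds to `arima_reg()` with the auto_arima engine; calibrated models can also be plotted against the history. A sketch, with the holdout mirroring the 30-day horizon:

```r
library(tidymodels)
library(modeltime)
library(timetk)

datos_tbl <- as_tibble(datos)
splits <- time_series_split(datos_tbl, date_var = fecha,
                            assess = 30, cumulative = TRUE)

fit_arima <- arima_reg() %>%
  set_engine("auto_arima") %>%
  fit(MWh ~ fecha, data = training(splits))

# Forecast the test period and plot against the full history
modeltime_table(fit_arima) %>%
  modeltime_calibrate(new_data = testing(splits)) %>%
  modeltime_forecast(new_data = testing(splits),
                     actual_data = datos_tbl) %>%
  plot_modeltime_forecast()
```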

4.4 Linear regression

4.4.1 .

4.4.2 fable

fable_workflow(model = TSLM(MWh ~ trend() + season()),
               horizon = 30,
               year = 2018)
[1] "The fitted model:"
Series: MWh 
Model: TSLM 

Residuals:
   Min     1Q Median     3Q    Max 
-70875  -5795   -198   6542  32277 

Coefficients:
                Estimate Std. Error t value Pr(>|t|)    
(Intercept)    1.209e+05  4.050e+02 298.433   <2e-16 ***
trend()        1.228e+01  6.880e-02 178.438   <2e-16 ***
season()week2 -9.643e+02  4.791e+02  -2.013   0.0442 *  
season()week3 -7.284e+03  4.791e+02 -15.204   <2e-16 ***
season()week4 -2.364e+04  4.791e+02 -49.343   <2e-16 ***
season()week5 -7.054e+03  4.791e+02 -14.724   <2e-16 ***
season()week6 -9.198e+02  4.791e+02  -1.920   0.0549 .  
season()week7  1.202e+01  4.791e+02   0.025   0.9800    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 10280 on 6439 degrees of freedom
Multiple R-squared: 0.847,  Adjusted R-squared: 0.8469
F-statistic:  5094 on 7 and 6439 DF, p-value: < 2.22e-16
[1] "Training accuracy:"
[1] "Portmanteau tests for autocorrelation"
[1] "The forecast:"
[1] "Forecast accuracy:"

4.4.3 modeltime
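
In modeltime, the counterpart of TSLM with trend and seasonal terms is an ordinary parsnip `linear_reg()` fitted on calendar features derived from the date; a sketch, with illustrative feature choices:

```r
library(tidymodels)
library(modeltime)
library(timetk)

datos_tbl <- as_tibble(datos)
splits <- time_series_split(datos_tbl, date_var = fecha,
                            assess = 30, cumulative = TRUE)

# Trend as a numeric date, weekly seasonality as day-of-week terms
fit_lm <- linear_reg() %>%
  set_engine("lm") %>%
  fit(MWh ~ as.numeric(fecha) + wday(fecha, label = TRUE),
      data = training(splits))

modeltime_table(fit_lm) %>%
  modeltime_calibrate(new_data = testing(splits)) %>%
  modeltime_accuracy()
```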

4.5 Dynamic regression

4.5.1 .

4.5.2 fable

fable_workflow(model = ARIMA(MWh ~ trend() +
                               season() +
                               PDQ(0,0,0)),
               horizon = 30,
               year = 2018)
[1] "The fitted model:"
Series: MWh 
Model: LM w/ ARIMA(2,0,4) errors 

Coefficients:
         ar1     ar2     ma1      ma2      ma3      ma4  trend()  season()week2
      0.1854  0.7649  0.7259  -0.3654  -0.4059  -0.1923  12.2789      -966.8743
s.e.  0.0250  0.0239  0.0271   0.0165   0.0200   0.0128   0.4931       174.9094
      season()week3  season()week4  season()week5  season()week6  season()week7
         -7271.1371    -23630.2506     -7047.1312      -915.5542        15.3954
s.e.       240.0885       266.6915       266.6923       240.0921       174.9282
       intercept
      120848.325
s.e.    1847.866

sigma^2 estimated as 23596563:  log likelihood=-63865.91
AIC=127761.8   AICc=127761.9   BIC=127863.4
[1] "Training accuracy:"
[1] "Portmanteau tests for autocorrelation"
[1] "The forecast:"
[1] "Forecast accuracy:"

4.5.3 modeltime
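
The auto_arima engine in modeltime accepts external regressors through the fitting formula, so a dynamic regression in the spirit of the fable model above can be sketched as follows; the regressors here are illustrative assumptions, not taken from the analysis above:

```r
library(tidymodels)
library(modeltime)
library(timetk)

datos_tbl <- as_tibble(datos)
splits <- time_series_split(datos_tbl, date_var = fecha,
                            assess = 30, cumulative = TRUE)

# Terms beyond the date itself become xregs for the ARIMA errors
fit_dyn <- arima_reg() %>%
  set_engine("auto_arima") %>%
  fit(MWh ~ fecha + as.numeric(fecha) + wday(fecha, label = TRUE),
      data = training(splits))

modeltime_table(fit_dyn) %>%
  modeltime_calibrate(new_data = testing(splits)) %>%
  modeltime_accuracy()
```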

4.6 Harmonic dynamic regression

4.6.1 .

4.6.2 fable

fable_workflow(model = ARIMA(box_cox(MWh, lambda) ~ fourier(period = "1 week", K = 2) + 
                               fourier(period = "1 year", K = 15) +
                               PDQ(0,0,0)),
               horizon = 30,
               year = 2018)
[1] "The fitted model:"
Series: MWh 
Model: LM w/ ARIMA(1,1,5) errors 
Transformation: box_cox(.x, lambda) 

Coefficients:
          ar1      ma1     ma2      ma3     ma4      ma5
      -0.2124  -0.3472  0.0252  -0.4941  0.1705  -0.2571
s.e.   0.0317   0.0303  0.0185   0.0105  0.0175   0.0145
      fourier(period = "1 week", K = 2)C1_7  fourier(period = "1 week", K = 2)S1_7
                                     0.1861                                -0.0917
s.e.                                 0.0027                                 0.0027
      fourier(period = "1 week", K = 2)C2_7  fourier(period = "1 week", K = 2)S2_7
                                    -0.0767                                 0.0973
s.e.                                 0.0017                                 0.0017
      fourier(period = "1 year", K = 15)C1_365  fourier(period = "1 year", K = 15)S1_365
                                       -0.1136                                     0.111
s.e.                                    0.0120                                     0.012
      fourier(period = "1 year", K = 15)C2_365  fourier(period = "1 year", K = 15)S2_365
                                       -0.0502                                   -0.0581
s.e.                                    0.0074                                    0.0074
      fourier(period = "1 year", K = 15)C3_365  fourier(period = "1 year", K = 15)S3_365
                                       -0.0027                                    0.0295
s.e.                                    0.0061                                    0.0061
      fourier(period = "1 year", K = 15)C4_365  fourier(period = "1 year", K = 15)S4_365
                                       -0.0645                                   -0.0171
s.e.                                    0.0056                                    0.0056
      fourier(period = "1 year", K = 15)C5_365  fourier(period = "1 year", K = 15)S5_365
                                        0.0202                                    0.0004
s.e.                                    0.0054                                    0.0054
      fourier(period = "1 year", K = 15)C6_365  fourier(period = "1 year", K = 15)S6_365
                                       -0.0276                                   -0.0097
s.e.                                    0.0052                                    0.0052
      fourier(period = "1 year", K = 15)C7_365  fourier(period = "1 year", K = 15)S7_365
                                       -0.0244                                    0.0242
s.e.                                    0.0051                                    0.0051
      fourier(period = "1 year", K = 15)C8_365  fourier(period = "1 year", K = 15)S8_365
                                       -0.0273                                    0.0243
s.e.                                    0.0051                                    0.0051
      fourier(period = "1 year", K = 15)C9_365  fourier(period = "1 year", K = 15)S9_365
                                       -0.0188                                    0.0114
s.e.                                    0.0050                                    0.0050
      fourier(period = "1 year", K = 15)C10_365
                                        -0.0299
s.e.                                     0.0049
      fourier(period = "1 year", K = 15)S10_365
                                         0.0095
s.e.                                     0.0050
      fourier(period = "1 year", K = 15)C11_365
                                        -0.0158
s.e.                                     0.0049
      fourier(period = "1 year", K = 15)S11_365
                                         0.0196
s.e.                                     0.0049
      fourier(period = "1 year", K = 15)C12_365
                                        -0.0198
s.e.                                     0.0049
      fourier(period = "1 year", K = 15)S12_365
                                         0.0077
s.e.                                     0.0049
      fourier(period = "1 year", K = 15)C13_365
                                        -0.0146
s.e.                                     0.0048
      fourier(period = "1 year", K = 15)S13_365
                                         0.0111
s.e.                                     0.0048
      fourier(period = "1 year", K = 15)C14_365
                                        -0.0141
s.e.                                     0.0048
      fourier(period = "1 year", K = 15)S14_365
                                         0.0163
s.e.                                     0.0048
      fourier(period = "1 year", K = 15)C15_365
                                        -0.0043
s.e.                                     0.0047
      fourier(period = "1 year", K = 15)S15_365  intercept
                                         0.0112      3e-04
s.e.                                     0.0048      1e-04

sigma^2 estimated as 0.01787:  log likelihood=3843.61
AIC=-7603.23   AICc=-7602.66   BIC=-7318.84
[1] "Training accuracy:"
[1] "Portmanteau tests for autocorrelation"
[1] "The forecast:"
[1] "Forecast accuracy:"

fable_workflow(model = ARIMA(MWh ~ fourier(period = "1 week", K = 2) + 
                               fourier(period = "1 year", K = 15) +
                               PDQ(0,0,0)),
               horizon = 30,
               year = 2018)
[1] "The fitted model:"
Series: MWh 
Model: LM w/ ARIMA(1,1,5) errors 

Coefficients:
          ar1      ma1     ma2      ma3     ma4      ma5
      -0.2156  -0.3032  0.0212  -0.4937  0.1511  -0.2752
s.e.   0.0302   0.0286  0.0178   0.0103  0.0166   0.0153
      fourier(period = "1 week", K = 2)C1_7  fourier(period = "1 week", K = 2)S1_7
                                  8249.0776                             -4040.5453
s.e.                               122.3407                               122.3205
      fourier(period = "1 week", K = 2)C2_7  fourier(period = "1 week", K = 2)S2_7
                                 -3295.3898                              4220.0251
s.e.                                72.3747                                72.3801
      fourier(period = "1 year", K = 15)C1_365  fourier(period = "1 year", K = 15)S1_365
                                    -5253.9248                                 4909.4761
s.e.                                  538.4727                                  538.0435
      fourier(period = "1 year", K = 15)C2_365  fourier(period = "1 year", K = 15)S2_365
                                    -2178.5503                                -2671.6256
s.e.                                  332.0562                                  332.1148
      fourier(period = "1 year", K = 15)C3_365  fourier(period = "1 year", K = 15)S3_365
                                       56.5035                                 1386.2932
s.e.                                  277.0965                                  277.6884
      fourier(period = "1 year", K = 15)C4_365  fourier(period = "1 year", K = 15)S4_365
                                    -2849.5388                                 -827.5499
s.e.                                  254.9897                                  255.5476
      fourier(period = "1 year", K = 15)C5_365  fourier(period = "1 year", K = 15)S5_365
                                     1073.2311                                   58.1184
s.e.                                  244.0289                                  244.1479
      fourier(period = "1 year", K = 15)C6_365  fourier(period = "1 year", K = 15)S6_365
                                    -1266.1608                                 -444.8868
s.e.                                  237.4888                                  237.5181
      fourier(period = "1 year", K = 15)C7_365  fourier(period = "1 year", K = 15)S7_365
                                     -905.8287                                  1023.350
s.e.                                  233.0168                                   233.269
      fourier(period = "1 year", K = 15)C8_365  fourier(period = "1 year", K = 15)S8_365
                                    -1130.9984                                 1046.9334
s.e.                                  229.8515                                  230.0595
      fourier(period = "1 year", K = 15)C9_365  fourier(period = "1 year", K = 15)S9_365
                                     -836.6662                                  482.3220
s.e.                                  227.4181                                  227.4204
      fourier(period = "1 year", K = 15)C10_365
                                     -1247.5351
s.e.                                   225.2296
      fourier(period = "1 year", K = 15)S10_365
                                       376.1332
s.e.                                   225.2962
      fourier(period = "1 year", K = 15)C11_365
                                      -642.8160
s.e.                                   223.2337
      fourier(period = "1 year", K = 15)S11_365
                                       753.0371
s.e.                                   223.4089
      fourier(period = "1 year", K = 15)C12_365
                                      -810.9698
s.e.                                   221.4742
      fourier(period = "1 year", K = 15)S12_365
                                       342.5265
s.e.                                   221.5219
      fourier(period = "1 year", K = 15)C13_365
                                      -604.0940
s.e.                                   219.7441
      fourier(period = "1 year", K = 15)S13_365
                                       436.8440
s.e.                                   219.7255
      fourier(period = "1 year", K = 15)C14_365
                                      -558.3188
s.e.                                   217.9458
      fourier(period = "1 year", K = 15)S14_365
                                       702.6378
s.e.                                   218.0272
      fourier(period = "1 year", K = 15)C15_365
                                      -157.1762
s.e.                                   216.1706
      fourier(period = "1 year", K = 15)S15_365  intercept
                                       408.9707    12.9355
s.e.                                   216.2393     5.9488

sigma^2 estimated as 33508727:  log likelihood=-64972.95
AIC=130029.9   AICc=130030.5   BIC=130314.3
[1] "Training accuracy:"
[1] "Portmanteau tests for autocorrelation"
[1] "The forecast:"
[1] "Forecast accuracy:"

4.6.3 modeltime

4.7 Prophet

4.7.1 .

4.7.2 fable

fable_workflow(model = prophet(MWh),
               horizon = 30,
               year = 2018)
[1] "The fitted model:"
Series: MWh 
Model: prophet 

A model specific report is not available for this model class.
[1] "Training accuracy:"
[1] "Portmanteau tests for autocorrelation"
[1] "The forecast:"
[1] "Forecast accuracy:"

4.7.3 modeltime
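
Prophet is also available in modeltime through `prophet_reg()`. A sketch, where the explicit weekly and yearly seasonalities are an illustrative choice matching the patterns found in the EDA:

```r
library(tidymodels)
library(modeltime)
library(timetk)

datos_tbl <- as_tibble(datos)
splits <- time_series_split(datos_tbl, date_var = fecha,
                            assess = 30, cumulative = TRUE)

fit_prophet <- prophet_reg(seasonality_weekly = TRUE,
                           seasonality_yearly = TRUE) %>%
  set_engine("prophet") %>%
  fit(MWh ~ fecha, data = training(splits))

modeltime_table(fit_prophet) %>%
  modeltime_calibrate(new_data = testing(splits)) %>%
  modeltime_accuracy()
```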

4.8 FASSTER

4.8.1 .

4.8.2 fable

fable_workflow(model = fasster(MWh ~ fourier(period = "1 week", K = 2) +
                                 fourier(period = "1 year", K = 10)),
               horizon = 30,
               year = 2018)
[1] "The fitted model:"
Series: MWh 
Model: FASSTER 

Estimated variances:
 State noise variances (W):
  fourier(period = "1 week", K = 2)
   4.7028e-11 1.6002e-10 5.7203e-11 4.0645e-11
  fourier(period = "1 year", K = 10)
   4.5689e-04 1.6029e-04 4.3514e-04 4.5226e-04 6.1666e-04 4.8102e-04 5.0731e-04 5.0293e-04 4.3198e-04 5.8838e-04 5.4549e-04 4.4866e-04 2.7417e-04 2.9641e-04 1.1877e-04 1.4319e-04 3.3793e-05 3.6048e-05 1.4243e-06 2.1692e-06

 Observation noise variance (V):
  6.0478e+08
[1] "Training accuracy:"
[1] "Portmanteau tests for autocorrelation"
[1] "The forecast:"
[1] "Forecast accuracy:"

4.8.3 modeltime

4.9 Neural network autoregression

4.9.1 .

4.9.2 fable

fable_workflow(model = decomposition_model(
  STL(MWh ~ trend(window = 183)),
  NNETAR(season_adjust)
),
               horizon = 30,
               year = 2018)
Non-integer lag orders for random walk models are not supported. Rounding to the nearest integer.
[1] "The fitted model:"
Series: MWh 
Model: STL decomposition model 
Combination: season_adjust + season_year + season_week

======================================================

Series: season_adjust + season_year 
Model: COMBINATION 
Combination: season_adjust + season_year

========================================

Series: season_adjust 
Model: NNAR(37,1,19)[7] 

Average of 20 networks, each of which is
a 37-19-1 network with 742 weights
options were - linear output units 

sigma^2 estimated as 6385944

Series: season_year 
Model: SNAIVE 

sigma^2: 288972.6866 


Series: season_week 
Model: SNAIVE 

sigma^2: 133957.794 

[1] "Training accuracy:"
[1] "Portmanteau tests for autocorrelation"
[1] "The forecast:"
[1] "Forecast accuracy:"

fable_workflow(model = NNETAR(box_cox(MWh, lambda = lambda)),
               horizon = 30,
               year = 2018)
[1] "The fitted model:"
Series: MWh 
Model: NNAR(36,1,18)[7] 
Transformation: box_cox(.x, lambda = lambda) 

Average of 20 networks, each of which is
a 36-18-1 network with 685 weights
options were - linear output units 

sigma^2 estimated as 0.007014
[1] "Training accuracy:"
[1] "Portmanteau tests for autocorrelation"
[1] "The forecast:"
[1] "Forecast accuracy:"

4.9.3 modeltime

4.10 The proposed model

5 Comparison across models

5.1 Training adjustment

5.2 Forecasting adjustment

5.3 Cross-validation

5.4 Forecasts

6 Conclusions

---
title: "Forecasting daily electricity load using `fable` and `modeltime`"
subtitle: "IDI-II. PhD in Engineering Sciences"
date: 2020-08-20
author: "Pablo Benavides-Herrera"
output: 
  html_notebook:
    toc: TRUE
    toc_float: TRUE
    theme: journal
    highlight: tango
    number_sections: TRUE
---

```{r setup, include=FALSE}
knitr::opts_chunk$set(fig.width=12, fig.height=10)
```


# Introduction

This notebook contains an analysis and forecasting exercise on daily electricity load, using the `fable` and `modeltime` packages.

## Packages

The packages necessary to run this notebook are the following.

```{r pkgs, message=FALSE, warning=FALSE}
library(tidyverse) # data cleaning, wrangling, plotting
library(lubridate) # dates handling
library(patchwork) # plot setup
library(plotly)    # interactive plots

# the tidyverts
library(tsibble)
library(feasts)
library(fable)
library(fable.prophet)
library(fasster)

# modeltime
library(tidymodels)
library(modeltime)
```

## The data

The main data are the daily electricity load from the Western Regional Control Management (GCROC) of CENACE. We also join holiday data to get insights on the variation.

```{r data - load, message=FALSE}
datos <- read_csv("gcroc.csv")
festivos <- read_csv("festivos_utf8.csv")
datos <- datos %>% 
  left_join(festivos, by = c("fecha" = "fecha"))

datos <- datos %>% 
  mutate(fecha = dmy(fecha),
         dia_semana = factor(dia_semana,
                             levels = c(1:7)),
         Day_type = case_when(
           festivo %in% festivos$festivo ~ "Holiday",
           wday(fecha,week_start = 1) == 1 ~ "Monday",
           wday(fecha, week_start = 1) %in% 1:5 ~ "Weekday",
           TRUE ~ "Weekend"
         ) %>% factor(levels = c("Monday","Weekday",
                                 "Weekend","Holiday")),
         Month = month(fecha, label = TRUE) %>% factor(ordered = F)
         ) %>% 
  as_tsibble(index = fecha) %>% 
  rename(`temp min` = temp_min, 
         `temp max` = temp_max,
         `temp prom` = temp_prom)
datos
```
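
The holiday join above is a plain left join on the date column. As a minimal sketch, assuming only base R and made-up toy values, the same semantics can be reproduced with `merge()`:

```r
# Toy stand-ins for the load and holiday tables; dates and names are
# invented for illustration only.
loads <- data.frame(fecha = c("01/01/2020", "02/01/2020", "03/01/2020"),
                    MWh   = c(100, 120, 115))
holidays <- data.frame(fecha   = "01/01/2020",
                       festivo = "New Year")

# all.x = TRUE keeps every row of `loads`, filling `festivo` with NA
# when a date has no matching holiday -- the same semantics as left_join().
joined <- merge(loads, holidays, by = "fecha", all.x = TRUE)
joined
```

Non-holiday dates end up with `NA` in `festivo`, which is why the `case_when()` above only needs to test the holiday branch first.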

```{r, include=FALSE}
# To print in LaTeX form
datos %>% 
  as_tibble() %>%  
  mutate(fecha = as.character(fecha)) %>%  slice_head(n = 10) %>% 
  xtable::xtable() %>% print() # requires the xtable package
```

# Exploratory Data Analysis (EDA)

## Visual inspection

We begin by plotting the data in a time plot, as well as in boxplots, first by day type and then by month, to check whether we can find differences across time.


```{r time-box plots}
g1 <- ggplot(datos,aes(x = fecha, y = MWh)) + xlab("")
# Time plot
p1 <- g1 + geom_line(color = "blue") + 
  ggtitle("Historic Load") 
# Day type boxplot
p2 <- g1 + geom_boxplot(aes(x = Day_type, fill = Day_type)) + 
  theme(legend.position = "none",
        axis.text.x =  element_text(angle = 45,
                                      hjust = 1)) + 
  ggtitle("Load Across Day Types")
# Month boxplot
p3 <- g1 + geom_boxplot(aes(x = Month, 
  fill = Month)
  ) + 
  theme(legend.position = "none",
        axis.text.x =  element_text(angle = 45,
                                      hjust = 1)) + 
  ggtitle("Load Across Months")
# plot group 1
pg1 <- p1 / (p2 | p3)

pg1 + plot_annotation(
  title = " Daily Electricity Load in the GCROC"
)
```

As can be seen from the time plot, the data exhibits a long-term upward trend, as well as indications of seasonality.

By inspecting both boxplots, we see that differences arise both across day types (whether it is a Monday, weekday, weekend or holidays), and across months, further suggesting two types of seasonality: weekly and yearly.

```{r stats}
datos  %>%  
  features_if(is_double,features =
                         list(
       Mean = (~ mean(., na.rm = TRUE)),
       `Std. Dev.` = (~ sd(., na.rm = TRUE)),
       Min = (~ min(., na.rm = TRUE)),
       Median = (~ median(., na.rm = TRUE)),
       Max = (~ max(., na.rm = TRUE)),
       N = length)
       ) %>% 
  pivot_longer(everything()) %>% 
  separate(name, into = c("var","stat"), sep = "_") %>% 
  pivot_wider(names_from = "var",values_from = "value")
```
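
The `features_if()` call above maps each summary function over every numeric column. As a base-R sketch of the same idea (toy data invented for illustration), `sapply()` produces the equivalent table:

```r
# Toy numeric columns standing in for MWh and the temperature variables.
df <- data.frame(MWh      = c(100, 120, 115, 130),
                 temp_max = c(25, 30, 28, 32))

# Apply the same set of summary statistics to each column.
stats <- sapply(df, function(x) c(
  Mean     = mean(x, na.rm = TRUE),
  Std.Dev. = sd(x, na.rm = TRUE),
  Min      = min(x, na.rm = TRUE),
  Median   = median(x, na.rm = TRUE),
  Max      = max(x, na.rm = TRUE),
  N        = length(x)
))
t(stats)  # one row per variable, one column per statistic
```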

We check for the relationship between electricity load and temperature across day types.

```{r MWh v. temp, fig.width=12, fig.height=10}
p4 <- datos %>% 
  filter(year(fecha)>2001) %>% 
  ggplot(aes(x = `temp max`, y = MWh)) + 
  geom_point(aes(color = Day_type)) + 
  ggtitle("Daily electricity load as a function of maximum temperature") + 
  xlab("(C°)") + 
  geom_smooth(method = "lm", formula = y ~ x + I(x^2)) +
  labs(color = "") +
  theme(legend.position = "top")
  

p4
```

The data show a U-shaped relationship: when the maximum temperature is low, energy demand tends to be higher, and the effect is even stronger when the maximum temperature is high.

From the plot above we can see that holidays and weekends tend to sit in the lower part, while Mondays and weekdays cluster higher. The separation between them is not entirely clear, possibly because of the upward trend over the years.

We could plot each year separately to get a better look at it, or remove the trend from the series.

Plotting each year separately yields the plot below.

```{r}
p4 + facet_wrap(~ factor(year(fecha))) 
```

Removing the trend using an STL decomposition:

```{r}
p5 <- datos %>%
  mutate(detrend = MWh - datos %>% 
           model(STL(MWh)) %>% 
           components() %>% 
           pull(trend)
           ) %>% 
  ggplot(aes(x = `temp max`, y = detrend)) + 
  geom_point(aes(color = Day_type)) + 
  ggtitle("Detrended daily electricity load as a function of maximum temperature") + 
  xlab("(C°)") +
  labs(color = "") +
  theme(legend.position = "top")

p5 + 
  geom_smooth(method = "lm", formula = y ~ x + I(x^2))
p5 + 
  geom_smooth(method = "lm", formula = y ~ x + I(x^2),
              aes(color = Day_type)) 
```

In both cases, we can now see clearer clusters forming, supporting what we saw in the boxplots. Adding a global trend line or one per day type shows a similar pattern.

## Data transformations

We check whether transforming the data could help us reduce irrelevant variation.

First, we take the log values.

```{r}
p1_log <- g1 + geom_line(aes(y = log(MWh)), color = "orchid2") + 
  ggtitle("Log of historic Load") + ylab("log(MWh)")

p1 / p1_log
```

Now, we try using a Box-Cox transformation, choosing lambda with the Guerrero feature.

```{r}
lambda <- datos %>% 
  slice_head(n = nrow(datos) - 30) %>% 
  features(MWh, features = guerrero) %>%
  pull(lambda_guerrero)

lambda

p1_boxcox <- g1 + geom_line(aes(y = box_cox(MWh, lambda = lambda)), color = "turquoise2") + 
  ggtitle("Historic Load with a Box-Cox transformation") + ylab(paste0("Box-Cox using lambda = ",round(lambda,digits = 4)))

p1 / p1_boxcox
```
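
For reference, the transformation itself is simple. A minimal base-R sketch of the standard one-parameter Box-Cox formula, which for a positive series matches what `box_cox()` applies (with the log as the limiting case at `lambda = 0`):

```r
# Box-Cox transform: (y^lambda - 1) / lambda, and log(y) when lambda = 0.
box_cox_manual <- function(y, lambda) {
  if (abs(lambda) < 1e-8) log(y) else (y^lambda - 1) / lambda
}

box_cox_manual(100, 0)    # log(100)
box_cox_manual(100, 0.5)  # (sqrt(100) - 1) / 0.5 = 18
box_cox_manual(100, 1)    # 100 - 1 = 99
```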




# `R` Packages for modeling and forecasting

We will produce forecasts using a wide variety of models, with two different `R` packages: `fable` and `modeltime`.

## `fable` and the tidyver**ts** ecosystem

A generalized function to obtain forecasts via `fable` is shown below. One must specify the model with as much customization as needed.

```{r fable-function}
fable_workflow <- function(data = datos, model, lag = 10, dof = 0, horizon = 30, year = 2018){
  # Splitting data into train and test sets
  train <- data %>% 
    slice_head(n = (nrow(data) - horizon))

  test <- data %>% 
    slice_tail(n = horizon)

  # fitting the model
  fable <- train %>%
    model(model)
  
  print("The fitted model:")
  fable %>% 
    report()
  
  print("Training accuracy:")
  fable %>% 
    fabletools::accuracy() %>% 
    print()
  
  # Residual diagnostics:
  resids <- fable %>% 
    gg_tsresiduals() +
    ggtitle("Residual diagnostics")
  print(resids)
  
  print("Portmanteau tests for autocorrelation")
  fable %>% 
    augment() %>% 
    features(.resid, ljung_box, lag = lag, dof = dof) %>% 
    print()
  
  
  print("The forecast:")
  fable_fcst <- fable %>% 
    forecast(h = horizon)
  print(fable_fcst)
  
  # Plotting the forecast:
  fcst <- 
    # Historic data vs. forecast
    fable_fcst %>% 
    autoplot(data) / 
    # recent history vs. forecast
    fable_fcst %>% 
    autoplot(data %>% filter(year(fecha)>year)) +
    plot_annotation(title = "The Forecast")
  print(fcst)
  
  print("Forecast accuracy:")
  fable_fcst %>% 
    fabletools::accuracy(test) %>% 
    print()
  
}
```
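
The portmanteau step in the workflow relies on the Ljung-Box statistic. The same test ships with base R as `stats::Box.test()`; here it is shown on simulated white noise (only the structure of the result is asserted, not specific values):

```r
# Ljung-Box portmanteau test on simulated white noise: with no
# autocorrelation in the residuals, we expect a large p-value.
set.seed(42)
resid_sim <- rnorm(200)

lb <- Box.test(resid_sim, lag = 10, type = "Ljung-Box")
lb$statistic  # chi-squared statistic on `lag` degrees of freedom
lb$p.value
```

`Box.test()`'s `fitdf` argument plays the role of `dof` in the workflow above, subtracting the number of fitted parameters from the degrees of freedom.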



## `modeltime`: an extension to `tidymodels`


# The models

Below we implement many of the models available in both packages.

## Time series decomposition {.tabset .tabset-fade}

### .

There are many decomposition methods.

### `fable`

#### Classical decomposition

```{r}
datos %>% 
  model(classical_decomposition(MWh, type = "multiplicative")) %>% 
  components() %>% 
  autoplot() + xlab("year") +
  ggtitle("Classical multiplicative decomposition")
```
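
Base R ships the same classical method as `stats::decompose()`, which estimates the trend with a centered moving average and the seasonal component as period averages. A sketch on a synthetic monthly series (data invented for illustration):

```r
# Synthetic 4-year monthly series with a trend and multiplicative seasonality.
set.seed(1)
n <- 48
trend_true  <- 100 + 0.5 * (1:n)
season_true <- rep(1 + 0.1 * sin(2 * pi * (1:12) / 12), times = 4)
y <- ts(trend_true * season_true * rnorm(n, 1, 0.01), frequency = 12)

dec <- decompose(y, type = "multiplicative")
# By construction, trend * seasonal * random reproduces the series
# wherever the moving-average trend is defined (it is NA at the edges).
head(dec$seasonal, 12)
```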

#### STL decomposition

An STL decomposition is obtained with the following code.

```{r}
datos %>% 
  model(STL(MWh)) %>% 
  components() %>% 
  autoplot() + xlab("year") +
  ggtitle("STL decomposition")
```

The STL decomposition, which here handles multiple seasonal periods, can be customized. In this case, we want to make the trend smoother.


```{r}
datos %>% 
  model(STL(MWh ~ trend(window = 183))) %>% 
  components() %>% 
  autoplot() + xlab("year") +
  ggtitle("STL decomposition")
```

We could set the seasonality to be fixed specifying `season(window = 'periodic')`.

```{r}
datos %>% 
  model(STL(MWh ~ trend(window = 183) + 
              season(window = 'periodic'))) %>% 
  components() %>% 
  autoplot() + xlab("year") +
  ggtitle("STL decomposition")
```

* Modeling and forecasting using STL and ARIMA:

```{r}
fable_workflow(data = datos, 
               model = decomposition_model(
                 STL(box_cox(MWh, lambda = lambda) ~ trend(window = 183)),
                 ARIMA(season_adjust ~ PDQ(0,0,0))
               ))
```

* Using STL and ETS:

```{r}
fable_workflow(data = datos, 
               model = decomposition_model(
                 STL(box_cox(MWh, lambda = lambda) ~ trend(window = 183)),
                 ETS(season_adjust ~ season("N"))
               ))
```

* Using STL, ARIMA, and Fourier terms:

```{r}
fable_workflow(data = datos, 
               model = decomposition_model(
                 STL(box_cox(MWh, lambda = lambda) ~ trend(window = 183)),
                 ARIMA(season_adjust ~ PDQ(0,0,0)),
                 TSLM(season_year ~ fourier(K = 10, period = "1 year")),
                 TSLM(season_week ~ fourier(K = 2, period = "1 week"))
               ))
```


### `modeltime`

### {-}
## Exponential smoothing {.tabset .tabset-fade}

### .

### `fable`

```{r fable-ETS}
fable_workflow(model = ETS(MWh, opt_crit = "mse"),
               horizon = 30,
               year = 2018)
```



### `modeltime`

### {-}

## ARIMA {.tabset .tabset-fade}

### .

### `fable`

```{r fable-arima}
fable_workflow(model = ARIMA(MWh),
               horizon = 30,
               year = 2018)
```



### `modeltime`

### {-}

## Linear regression {.tabset}

### .

### `fable`

```{r}
fable_workflow(model = TSLM(MWh ~ trend() + season()),
               horizon = 30,
               year = 2018)
```


### `modeltime`

### {-}

## Dynamic regression {.tabset}

### .

### `fable`

```{r}
fable_workflow(model = ARIMA(MWh ~ trend() +
                               season() +
                               PDQ(0,0,0)),
               horizon = 30,
               year = 2018)
```


### `modeltime`

### {-}


## Harmonic dynamic regression {.tabset}

### .

### `fable`

```{r}
fable_workflow(model = ARIMA(box_cox(MWh, lambda) ~ fourier(period = "1 week", K = 2) + 
                               fourier(period = "1 year", K = 15) +
                               PDQ(0,0,0)),
               horizon = 30,
               year = 2018)
```


```{r}
fable_workflow(model = ARIMA(MWh ~ fourier(period = "1 week", K = 2) + 
                               fourier(period = "1 year", K = 15) +
                               PDQ(0,0,0)),
               horizon = 30,
               year = 2018)
```
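
The `fourier()` regressors above are just paired sine and cosine waves at harmonics of the seasonal period. Assuming nothing beyond base R, `K` harmonics for a period `m` can be built by hand (`fourier_terms` is a hypothetical helper, not part of any package):

```r
# Build K sine/cosine harmonic pairs for seasonal period m, mirroring
# the design matrix that fourier(period, K) supplies to the regression.
fourier_terms <- function(t, m, K) {
  X <- do.call(cbind, lapply(seq_len(K), function(k) {
    cbind(sin(2 * pi * k * t / m), cos(2 * pi * k * t / m))
  }))
  colnames(X) <- as.vector(rbind(paste0("S", seq_len(K)),
                                 paste0("C", seq_len(K))))
  X
}

X <- fourier_terms(t = 1:21, m = 7, K = 2)  # 3 weeks, 2 weekly harmonics
dim(X)  # 21 rows, 4 columns: S1, C1, S2, C2
```

Each pair of columns repeats every `m` observations, which is what lets a small `K` capture a smooth seasonal pattern with few parameters.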

### `modeltime`

### {-}

## Prophet {.tabset}

### .

### `fable`

```{r}
fable_workflow(model = prophet(MWh),
               horizon = 30,
               year = 2018)
```

### `modeltime`

### {-}

## FASSTER {.tabset}

### .

### `fable`

```{r}
fable_workflow(model = fasster(MWh ~ fourier(period = "1 week", K = 2) +
                                 fourier(period = "1 year", K = 10)),
               horizon = 30,
               year = 2018)
```


### `modeltime`

### {-}

## Neural network autoregression {.tabset}

### .

### `fable`

```{r}
fable_workflow(model = decomposition_model(
  STL(MWh ~ trend(window = 183)),
  NNETAR(season_adjust)
),
               horizon = 30,
               year = 2018)
```


```{r}
fable_workflow(model = NNETAR(box_cox(MWh, lambda = lambda)),
               horizon = 30,
               year = 2018)
```


### `modeltime`

### {-}

## The proposed model


# Comparison across models

## Training adjustment

## Forecasting adjustment

## Cross-validation

## Forecasts


# Conclusions

